Relative Entropy Relaxations for Signomial Optimization
Signomial programs (SPs) are optimization problems specified in terms of
signomials, which are weighted sums of exponentials composed with linear
functionals of a decision variable. SPs are non-convex optimization problems in
general, and families of NP-hard problems can be reduced to SPs. In this paper
we describe a hierarchy of convex relaxations to obtain successively tighter
lower bounds of the optimal value of SPs. This sequence of lower bounds is
computed by solving increasingly larger-sized relative entropy optimization
problems, which are convex programs specified in terms of linear and relative
entropy functions. Our approach relies crucially on the observation that the
relative entropy function -- by virtue of its joint convexity with respect to
both arguments -- provides a convex parametrization of certain sets of globally
nonnegative signomials with efficiently computable nonnegativity certificates
via the arithmetic-geometric-mean inequality. By appealing to representation
theorems from real algebraic geometry, we show that our sequences of lower
bounds converge to the global optima for broad classes of SPs. Finally, we also
demonstrate the effectiveness of our methods via numerical experiments.
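The AM-GM certificate underlying these relaxations can be seen in a one-variable toy case: for a signomial with two positive terms and one negative term, the arithmetic-geometric-mean inequality bounds the positive part from below and certifies global nonnegativity without any search. The sketch below (a hypothetical example with made-up coefficients, not the paper's relaxation hierarchy) checks such a certificate numerically.

```python
import math

# Toy signomial f(x) = c1*e^(a1*x) + c2*e^(a2*x) - c*e^(((a1+a2)/2)*x).
# By the AM-GM inequality,
#   c1*e^(a1*x) + c2*e^(a2*x) >= 2*sqrt(c1*c2)*e^(((a1+a2)/2)*x),
# so f is globally nonnegative whenever c <= 2*sqrt(c1*c2) -- a certificate
# that requires no search over x. All coefficients here are illustrative.
C1, C2, A1, A2 = 3.0, 5.0, -1.0, 2.0
C = 2.0 * math.sqrt(C1 * C2)  # the largest coefficient certified by AM-GM

def f(x):
    mid = 0.5 * (A1 + A2)
    return C1 * math.exp(A1 * x) + C2 * math.exp(A2 * x) - C * math.exp(mid * x)

# Spot-check the certified nonnegativity on a grid (the certificate itself
# needs no such check; this is only a sanity test of the inequality).
assert all(f(x / 10.0) >= -1e-9 for x in range(-100, 101))
```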
Sufficient Dimension Reduction and Modeling Responses Conditioned on Covariates: An Integrated Approach via Convex Optimization
Given observations of a collection of covariates and responses (X, Y) ∈ R^p × R^q,
sufficient dimension reduction (SDR) techniques aim to identify a mapping
f : R^p → R^k with k ≪ p such that Y is independent of X conditioned on f(X).
The image f(X) summarizes the relevant information in a potentially large
number of covariates X that influence the responses Y. In many contemporary
settings, the number q of responses is also quite large, in addition to a large
number p of covariates. This leads to the challenge of fitting a succinctly
parameterized statistical model to Y | f(X), which is a problem that is usually
not addressed in a traditional SDR framework. In this paper, we present a
computationally tractable convex relaxation based estimator for simultaneously
(a) identifying a linear dimension reduction of the covariates that is
sufficient with respect to the responses, and (b) fitting several types of
structured low-dimensional models -- factor models, graphical models,
latent-variable graphical models -- to the conditional distribution of
Y | f(X). We analyze the consistency properties of our estimator in a
high-dimensional scaling regime. We also illustrate the performance of our
approach on a newsgroup dataset and on a dataset consisting of financial asset
prices.
Comment: 34 pages, 1 figure
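The core SDR premise -- that the responses depend on the covariates only through a low-dimensional projection -- can be illustrated with simulated data. In the sketch below (a synthetic example with an assumed known sufficient direction b, not the paper's convex estimator), the response depends on a three-dimensional covariate only through b·x, so it correlates strongly with a function of that projection and negligibly with an orthogonal direction.

```python
import random
import statistics

random.seed(1)
b = (1.0, -1.0, 0.0)  # assumed sufficient direction (known here by construction)

data = []
for _ in range(500):
    x = tuple(random.gauss(0, 1) for _ in range(3))
    s = sum(bi * xi for bi, xi in zip(b, x))  # the projection b.x
    y = s * s + random.gauss(0, 0.1)          # y depends on x only through s
    data.append((x, s, y))

def corr(u, v):
    """Pearson correlation of two equal-length sequences."""
    mu, mv = statistics.mean(u), statistics.mean(v)
    num = sum((a - mu) * (c - mv) for a, c in zip(u, v))
    return num / (statistics.pstdev(u) * statistics.pstdev(v) * len(u))

ys = [y for _, _, y in data]
proj_sq = [s * s for _, s, _ in data]  # a function of the projection
x3 = [x[2] for x, _, _ in data]        # a direction orthogonal to b

assert abs(corr(ys, proj_sq)) > 0.8    # the projection carries the signal
assert abs(corr(ys, x3)) < 0.3         # the orthogonal direction carries ~none
```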
Fitting Tractable Convex Sets to Support Function Evaluations
The geometric problem of estimating an unknown compact convex set from
evaluations of its support function arises in a range of scientific and
engineering applications. Traditional approaches typically rely on estimators
that minimize the error over all possible compact convex sets; in particular,
these methods do not allow for the incorporation of prior structural
information about the underlying set and the resulting estimates become
increasingly complicated to describe as the number of measurements
available grows. We address both of these shortcomings by describing a
framework for estimating tractably specified convex sets from support function
evaluations. Building on the literature in convex optimization, our approach is
based on estimators that minimize the error over structured families of convex
sets that are specified as linear images of concisely described sets -- such as
the simplex or the spectraplex -- in a higher-dimensional space that is not
much larger than the ambient space. Convex sets parametrized in this manner are
significant from a computational perspective as one can optimize linear
functionals over such sets efficiently; they serve a different purpose in the
inferential context of the present paper, namely, that of incorporating
regularization in the reconstruction while still offering considerable
expressive power. We provide a geometric characterization of the asymptotic
behavior of our estimators, and our analysis relies on the property that
certain sets which admit semialgebraic descriptions are Vapnik-Chervonenkis
(VC) classes. Our numerical experiments highlight the utility of our framework
over previous approaches in settings in which the measurements available are
noisy or small in number as well as those in which the underlying set to be
reconstructed is non-polyhedral.
Comment: 35 pages, 80 figures
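The computational appeal of linear images of simple sets comes from the fact that the support function of such a set reduces to an efficiently computable maximum of linear functionals. The sketch below (a hypothetical polytope example; the paper also treats non-polyhedral images such as those of the spectraplex) evaluates the support function of a polytope given as the convex hull of a finite vertex list, i.e., a linear image of the simplex.

```python
# For K = conv(v_1, ..., v_m), i.e., K = A * simplex with the v_i as the
# columns of A, the support function is h_K(u) = max_i <v_i, u>: optimizing
# a linear functional over K only requires scanning the vertices.
def support(vertices, u):
    """Support function h_K(u) of K = conv(vertices)."""
    return max(sum(vi * ui for vi, ui in zip(v, u)) for v in vertices)

# Hypothetical example set: the square [-1, 1]^2.
square = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
assert support(square, (1, 0)) == 1  # half-width in the x-direction
assert support(square, (1, 1)) == 2  # attained at the vertex (1, 1)
```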
Convex Graph Invariant Relaxations For Graph Edit Distance
The edit distance between two graphs is a widely used measure of similarity
that evaluates the smallest number of vertex and edge deletions/insertions
required to transform one graph to another. It is NP-hard to compute in
general, and a large number of heuristics have been proposed for approximating
this quantity. With few exceptions, these methods generally provide upper
bounds on the edit distance between two graphs. In this paper, we propose a new
family of computationally tractable convex relaxations for obtaining lower
bounds on graph edit distance. These relaxations can be tailored to the
structural properties of the particular graphs via convex graph invariants.
Specific examples that we highlight in this paper include constraints on the
graph spectrum as well as (tractable approximations of) the stability number
and the maximum-cut values of graphs. We prove under suitable conditions that
our relaxations are tight (i.e., exactly compute the graph edit distance) when
one of the graphs has few distinct eigenvalues. We also validate the utility of
our framework on synthetic problems as well as real applications involving
molecular structure comparison problems in chemistry.
Comment: 27 pages, 7 figures
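For very small graphs, the edit distance over vertex relabelings can be computed exactly by brute force, which is a useful baseline when checking lower bounds of the kind such relaxations produce. The sketch below (a toy O(n!) enumeration, not the paper's convex relaxation) minimizes the number of mismatched edges over all vertex bijections between two equal-sized simple graphs.

```python
import itertools

def edge_edit_distance(A, B):
    """Minimum number of edge insertions/deletions turning graph A into
    graph B, minimized over all vertex bijections. Graphs are equal-sized
    simple graphs given as adjacency matrices; O(n!), so toy sizes only."""
    n = len(A)
    best = None
    for p in itertools.permutations(range(n)):
        diff = sum(A[i][j] != B[p[i]][p[j]]
                   for i in range(n) for j in range(i + 1, n))
        best = diff if best is None else min(best, diff)
    return best

path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]      # path on 3 vertices
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]  # complete graph K3
assert edge_edit_distance(path, triangle) == 1  # one edge insertion suffices
assert edge_edit_distance(path, path) == 0
```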
False Discovery and Its Control in Low Rank Estimation
Models specified by low-rank matrices are ubiquitous in contemporary
applications. In many of these problem domains, the row/column space structure
of a low-rank matrix carries information about some underlying phenomenon, and
it is of interest in inferential settings to evaluate the extent to which the
row/column spaces of an estimated low-rank matrix signify discoveries about the
phenomenon. However, in contrast to variable selection, we lack a formal
framework to assess true/false discoveries in low-rank estimation; in
particular, the key source of difficulty is that the standard notion of a
discovery is a discrete one that is ill-suited to the smooth structure
underlying low-rank matrices. We address this challenge via a geometric
reformulation of the concept of a discovery, which then enables a natural
definition in the low-rank case. We describe and analyze a generalization of
the Stability Selection method of Meinshausen and Bühlmann to control for
false discoveries in low-rank estimation, and we demonstrate its utility
compared to previous approaches via numerical experiments.
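The stability selection idea being generalized can be sketched in the classical variable-selection setting: run a simple selector on many random subsamples and keep only the variables selected in a large fraction of them. The sketch below (a toy with a naive correlation-threshold selector and made-up data, not the paper's low-rank procedure) recovers the two truly relevant variables while the spurious ones drop out.

```python
import random

random.seed(0)
n, p = 200, 6
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
# Only variables 0 and 1 actually drive the response.
y = [row[0] + 0.8 * row[1] + random.gauss(0, 0.5) for row in X]

def corr(u, v):
    """Pearson correlation of two equal-length sequences."""
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    num = sum((a - mu) * (c - mv) for a, c in zip(u, v))
    du = sum((a - mu) ** 2 for a in u) ** 0.5
    dv = sum((c - mv) ** 2 for c in v) ** 0.5
    return num / (du * dv)

B, counts = 50, [0] * p
for _ in range(B):
    idx = random.sample(range(n), n // 2)   # random half-subsample
    ys = [y[i] for i in idx]
    for j in range(p):
        xj = [X[i][j] for i in idx]
        if abs(corr(xj, ys)) > 0.3:         # naive base selector
            counts[j] += 1

freq = [c / B for c in counts]                    # selection frequencies
stable = [j for j in range(p) if freq[j] >= 0.8]  # the "stable" set
assert stable == [0, 1]  # spurious variables are filtered out
```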
Interpreting Latent Variables in Factor Models via Convex Optimization
Latent or unobserved phenomena pose a significant difficulty in data analysis as they induce complicated and confounding dependencies among a collection of observed variables. Factor analysis is a prominent multivariate statistical modeling approach that addresses this challenge by identifying the effects of (a small number of) latent variables on a set of observed variables. However, the latent variables in a factor model are purely mathematical objects that are derived from the observed phenomena, and they do not have any interpretation associated with them. A natural approach for attributing semantic information to the latent variables in a factor model is to obtain measurements of some additional plausibly useful covariates that may be related to the original set of observed variables, and to associate these auxiliary covariates to the latent variables. In this paper, we describe a systematic approach for identifying such associations. Our method is based on solving computationally tractable convex optimization problems, and it can be viewed as a generalization of the minimum-trace factor analysis procedure for fitting factor models via convex optimization. We analyze the theoretical consistency of our approach in a high-dimensional setting as well as its utility in practice via experimental demonstrations with real data.